Results 1 - 20 of 27
1.
Front Comput Neurosci ; 17: 1258590, 2023.
Article in English | MEDLINE | ID: mdl-37927544

ABSTRACT

In everyday life, the brain processes a multitude of stimuli from the surrounding environment, requiring the integration of information from different sensory modalities to form a coherent perception. This process, known as multisensory integration, enhances the brain's response to redundant congruent sensory cues. However, it is equally important for the brain to segregate sensory inputs from distinct events, to interact with and correctly perceive the multisensory environment. This problem the brain must solve, known as the causal inference problem, is closely related to multisensory integration. It is widely recognized that the ability to integrate information from different senses emerges during the developmental period, as a function of our experience with multisensory stimuli. Consequently, multisensory integrative abilities are altered in individuals who have atypical experiences with cross-modal cues, such as those on the autism spectrum. However, no research has been conducted thus far on the developmental trajectory of causal inference and its relationship with experience. Here, we used a neuro-computational model to simulate and investigate the development of causal inference in both typically developing children and those on the autism spectrum. Our results indicate that higher exposure to cross-modal cues accelerates the acquisition of causal inference abilities, and that a minimum level of experience with multisensory stimuli is required to develop fully mature behavior. We then simulated the altered developmental trajectory of causal inference in individuals with autism by assuming reduced multisensory experience during training. The results suggest that causal inference reaches complete maturity much later in these individuals than in neurotypical individuals. Furthermore, we discuss the underlying neural mechanisms and network architecture involved in these processes, highlighting that the development of causal inference follows the evolution of the mechanisms subserving multisensory integration. Overall, this study provides a computational framework, unifying causal inference and multisensory integration, which allows us to suggest neural mechanisms and provide testable predictions about the development of such abilities in typically developing and autistic children.
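The abstract above gives no equations; as a point of reference, the following minimal sketch shows the normative Bayesian causal-inference computation that models of this kind are typically benchmarked against, in the style of Körding et al. (2007) model averaging. All parameter values and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def p_common(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=15.0, prior_c=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    arose from a single common cause (Gaussian likelihoods, zero-mean
    Gaussian spatial prior; all widths are hypothetical)."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under one shared source
    # (source position integrated out analytically).
    d1 = va*vv + va*vp + vv*vp
    L1 = (np.exp(-0.5*((x_a - x_v)**2*vp + x_a**2*vv + x_v**2*va)/d1)
          / (2*np.pi*np.sqrt(d1)))
    # Likelihood under two independent sources.
    L2 = (np.exp(-0.5*x_a**2/(va + vp)) * np.exp(-0.5*x_v**2/(vv + vp))
          / (2*np.pi*np.sqrt((va + vp)*(vv + vp))))
    return L1*prior_c / (L1*prior_c + L2*(1 - prior_c))

print(p_common(x_a=3.0, x_v=0.0))   # nearby cues -> likely one cause (~0.66)
print(p_common(x_a=25.0, x_v=0.0))  # distant cues -> likely two causes (~0.06)
```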

2.
Front Neural Circuits ; 16: 933455, 2022.
Article in English | MEDLINE | ID: mdl-36439678

ABSTRACT

Vision and touch both support spatial information processing. These sensory systems also exhibit highly specific interactions in spatial perception, which may reflect multisensory representations that are learned through visuo-tactile (VT) experiences. Recently, Wani and colleagues reported that task-irrelevant visual cues bias tactile perception, in a brightness-dependent manner, on a task requiring participants to detect unimanual and bimanual cues. Importantly, tactile performance remained spatially biased after VT exposure, even when no visual cues were presented. These effects on bimanual touch conceivably reflect cross-modal learning, but the neural substrates that are changed by VT experience are unclear. We previously described a neural network capable of simulating VT spatial interactions. Here, we exploited this model to test different hypotheses regarding potential network-level changes that may underlie the VT learning effects. Simulation results indicated that VT learning effects are inconsistent with plasticity restricted to unisensory visual and tactile hand representations. Similarly, VT learning effects were also inconsistent with changes restricted to the strength of inter-hemispheric inhibitory interactions. Instead, we found that both the hand representations and the inter-hemispheric inhibitory interactions need to be plastic to fully recapitulate VT learning effects. Our results imply that crossmodal learning of bimanual spatial perception involves multiple changes distributed over a VT processing cortical network.


Subjects
Spatial Processing, Touch Perception, Humans, Touch, Visual Perception, Space Perception
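One way to picture the hypothesis-testing strategy described in this abstract is to gate Hebbian updates with flags that restrict plasticity to one weight group at a time. The sketch below is purely illustrative: the array sizes, learning rates, and function names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                            # units per map (hypothetical size)
W_hand  = rng.normal(0.0, 0.1, (n, n))            # tactile hand-representation weights
W_inhib = -np.abs(rng.normal(0.2, 0.05, (n, n)))  # inter-hemispheric inhibitory weights

def vt_training_step(W_hand, W_inhib, pre, post, lr=0.01,
                     plastic_hand=True, plastic_inhib=True):
    """One Hebbian update during simulated visuo-tactile exposure; each
    hypothesis about the locus of learning enables a different subset."""
    if plastic_hand:                              # hypothesis: unisensory maps change
        W_hand = W_hand + lr * np.outer(post, pre)
    if plastic_inhib:                             # hypothesis: inhibition changes
        W_inhib = W_inhib - lr * np.outer(post, pre)
    return W_hand, W_inhib

pre, post = rng.random(n), rng.random(n)          # one simulated exposure pattern
for ph, pi in [(True, False), (False, True), (True, True)]:
    Wh, Wi = vt_training_step(W_hand, W_inhib, pre, post,
                              plastic_hand=ph, plastic_inhib=pi)
    print(f"hand plastic={ph}, inhib plastic={pi}: "
          f"dW_hand={np.abs(Wh - W_hand).sum():.3f}, "
          f"dW_inhib={np.abs(Wi - W_inhib).sum():.3f}")
```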
3.
Multisens Res ; 32(2): 111-144, 2019 01 01.
Article in English | MEDLINE | ID: mdl-31059469

ABSTRACT

Results in the recent literature suggest that multisensory integration in the brain follows the rules of Bayesian inference. However, how neural circuits can realize such inference, and how it can be learned from experience, is still the subject of active research. The aim of this work is to use a recent neurocomputational model to investigate how the likelihood and prior can be encoded in synapses, and how they affect audio-visual perception, in a variety of conditions characterized by different experience, different cue reliabilities, and temporal asynchrony. The model considers two unisensory networks (auditory and visual) with plastic receptive fields and plastic crossmodal synapses, trained during a learning period. During training, visual and auditory stimuli are more frequent and more sharply tuned close to the fovea. Model simulations after training were performed in crossmodal conditions to assess the auditory and visual perception bias: visual stimuli were positioned at different azimuths (±10° from the fovea) and coupled with an auditory stimulus at various audio-visual distances (±20°). Cue reliability was altered by using visual stimuli with two different contrast levels. Model predictions are compared with behavioral data. Results show that model predictions agree with behavioral data in a variety of conditions characterized by different roles of the prior and likelihood. Finally, the effects of a different unimodal or crossmodal prior, re-learning, temporal correlation among input stimuli, and visual damage (hemianopia) are tested, to illustrate the possible use of the model in clarifying important multisensory problems.


Subjects
Auditory Perception/physiology, Learning/physiology, Models, Neurological, Space Perception/physiology, Acoustic Stimulation, Bayes Theorem, Humans, Photic Stimulation
4.
J Neurosci ; 39(8): 1374-1385, 2019 02 20.
Article in English | MEDLINE | ID: mdl-30573648

ABSTRACT

Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events thereby increasing their impact on the sensorimotor circuitry of the SC, and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.

SIGNIFICANCE STATEMENT The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.


Subjects
Auditory Perception/physiology, Superior Colliculi/physiology, Visual Perception/physiology, Acoustic Stimulation, Animals, Cats, Cues, Darkness, Female, Male, Models, Neurological, Neurons/physiology, Photic Stimulation, Sensory Deprivation/physiology
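The competition-by-default logic can be illustrated with a toy steady-state rate model (not the paper's equations): cross-channel coupling is negative in the naive state and positive after development. With equal cues the naive response merely matches the best unisensory response, and making the cues unequal exposes the active suppression, mirroring the effectiveness manipulation described above. All gains are hypothetical.

```python
import numpy as np

def sc_response(I_v, I_a, coupling):
    """Toy SC neuron: each channel's drive is its own input plus a
    coupling term from the other channel (negative = inhibition),
    rectified, then summed at the multisensory neuron."""
    v = max(0.0, I_v + coupling * I_a)
    a = max(0.0, I_a + coupling * I_v)
    return v + a

for coupling, label in [(-0.5, "naive: mutual inhibition"),
                        (+0.5, "mature: cooperation")]:
    for I_v, I_a in [(1.0, 1.0), (1.0, 0.4)]:
        best_uni = max(sc_response(I_v, 0.0, coupling),
                       sc_response(0.0, I_a, coupling))
        multi = sc_response(I_v, I_a, coupling)
        print(f"{label}, inputs {I_v}/{I_a}: "
              f"best unisensory={best_uni:.2f}, multisensory={multi:.2f}")
# Naive, equal cues: the multisensory response equals the best unisensory
# one (looks like non-integration). Naive, unequal cues: it falls below
# the best unisensory response, revealing active competition.
```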
5.
Cogn Neurodyn ; 12(6): 525-547, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30483362

ABSTRACT

In accordance with a featural organization of semantic memory, this work aims to investigate, through an attractor network, the role of different kinds of features in the representation of concepts, in both normal and neurodegenerative conditions. We implemented new synaptic learning rules in order to take into account the role of partially shared features and of distinctive features with different saliency. The model includes semantic and lexical layers, coding, respectively, for object features and word-forms. Connections among nodes are strongly asymmetrical. To account for feature saliency, asymmetrical synapses were created using Hebbian rules of potentiation and depotentiation, setting different pre-synaptic and post-synaptic thresholds. A variable post-synaptic threshold, which automatically changed to reflect the feature frequency in different concepts (i.e., how many concepts share a feature), was used to account for partially shared features. The trained network solved naming tasks and word recognition tasks very well, exploiting the different roles of salient versus marginal features in concept identification. In the case of damage, superordinate concepts were preserved better than subordinate ones. Interestingly, the degradation of salient features, but not of marginal ones, prevented object identification. The model suggests that Hebbian rules, with adjustable post-synaptic thresholds, can provide a reliable semantic representation of objects by exploiting the statistics of input features.
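A minimal sketch of a threshold-gated Hebbian rule of the kind described above, under illustrative assumptions (unit counts, rates, and thresholds are placeholders, not the paper's values): updates occur only when the postsynaptic unit exceeds its threshold, the sign follows the presynaptic activity relative to its own threshold, and features that co-occur with different frequencies therefore develop asymmetric reciprocal weights.

```python
import numpy as np

def hebb_update(W, x, lr=0.05, th_pre=0.5, th_post=0.5):
    """Potentiate W[post, pre] when both units are above threshold,
    depress it when the postsynaptic unit is active but the presynaptic
    one is below threshold (the exact form here is an assumption)."""
    gate = np.maximum(x - th_post, 0.0)       # postsynaptic gating
    dW = lr * np.outer(gate, x - th_pre)      # sign set by presynaptic term
    W = np.clip(W + dW, 0.0, 1.0)
    np.fill_diagonal(W, 0.0)                  # no self-connections
    return W

rng = np.random.default_rng(1)
W = np.zeros((2, 2))                          # unit 0: salient, unit 1: marginal
for _ in range(200):
    x = np.array([1.0, float(rng.random() < 0.6)])  # marginal feature in 60% of objects
    W = hebb_update(W, x)
print(W)  # W[0, 1] != W[1, 0]: reciprocal synapses have become asymmetric
```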

6.
J Neurosci ; 38(14): 3453-3465, 2018 04 04.
Article in English | MEDLINE | ID: mdl-29496891

ABSTRACT

The ability to integrate information across multiple senses enhances the brain's ability to detect, localize, and identify external events. This process has been well documented in single neurons in the superior colliculus (SC), which synthesize concordant combinations of visual, auditory, and/or somatosensory signals to enhance the vigor of their responses. This increases the physiological salience of crossmodal events and, in turn, the speed and accuracy of SC-mediated behavioral responses to them. However, this capability is not an innate feature of the circuit and only develops postnatally after the animal acquires sufficient experience with covariant crossmodal events to form links between their modality-specific components. Of critical importance in this process are tectopetal influences from association cortex. Recent findings suggest that, despite its intuitive appeal, a simple generic associative rule cannot explain how this circuit develops its ability to integrate those crossmodal inputs to produce enhanced multisensory responses. The present neurocomputational model explains how this development can be understood as a transition from a default state in which crossmodal SC inputs interact competitively to one in which they interact cooperatively. Crucial to this transition is the operation of a learning rule requiring coactivation among tectopetal afferents for engagement. The model successfully replicates findings of multisensory development in normal cats and cats of either sex reared with special experience. In doing so, it explains how the cortico-SC projections can use crossmodal experience to craft the multisensory integration capabilities of the SC and adapt them to the environment in which they will be used.

SIGNIFICANCE STATEMENT The brain's remarkable ability to integrate information across the senses is not present at birth, but typically develops in early life as experience with crossmodal cues is acquired. Recent empirical findings suggest that the mechanisms supporting this development must be more complex than previously believed. The present work integrates these data with what is already known about the underlying circuit in the midbrain to create and test a mechanistic model of multisensory development. This model represents a novel and comprehensive framework that explains how midbrain circuits acquire multisensory experience and reveals how disruptions in this neurotypic developmental trajectory yield divergent outcomes that will affect the multisensory processing capabilities of the mature brain.


Subjects
Mesencephalon/physiology, Models, Neurological, Perception, Animals, Cats, Female, Learning, Male
7.
Front Hum Neurosci ; 11: 518, 2017.
Article in English | MEDLINE | ID: mdl-29163099

ABSTRACT

Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.

8.
Front Comput Neurosci ; 11: 89, 2017.
Article in English | MEDLINE | ID: mdl-29046631

ABSTRACT

The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when congruent audio-visual stimuli are used, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity.

9.
Eur J Neurosci ; 46(9): 2481-2498, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28949035

ABSTRACT

Recently, experimental and theoretical research has focused on the brain's ability to extract information from a noisy sensory environment, and on how cross-modal inputs are processed to solve the causal inference problem and provide the best estimate of external events. Despite empirical evidence suggesting that the nervous system uses a statistically optimal, probabilistic approach in addressing these problems, little is known about the brain architecture needed to implement these computations. The aim of this work was to build a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three topologically organized layers: two encode auditory and visual stimuli separately, are reciprocally connected via excitatory synapses, and send excitatory connections to the third, downstream layer. This synaptic organization realizes two mechanisms of cross-modal interaction: the first is responsible for the sensory representation of the external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among other findings, the network can account for the ventriloquism illusion, the pattern of sensory bias and the percept of unity as a function of the spatial auditory-visual distance, and the dependence of the auditory error on the causal inference. Finally, simulation results are consistent with probability matching as the perceptual strategy used in auditory-visual spatial localization tasks, in agreement with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.


Subjects
Auditory Perception, Neural Networks, Computer, Visual Perception, Auditory Perception/physiology, Brain/physiology, Humans, Illusions/physiology, Models, Neurological, Synapses/physiology, Visual Perception/physiology
10.
Neural Comput ; 29(3): 735-782, 2017 03.
Article in English | MEDLINE | ID: mdl-28095201

ABSTRACT

Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding-the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.


Subjects
Bayes Theorem, Brain/cytology, Learning/physiology, Models, Neurological, Neurons/physiology, Synapses/physiology, Acoustic Stimulation, Afferent Pathways/physiology, Animals, Brain/physiology, Computer Simulation, Humans, Photic Stimulation
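For readers unfamiliar with probabilistic population coding, the sketch below shows the generic decoding step such a network approximates: with Gaussian tuning curves f_i(s) and independent Poisson spiking, the population log-likelihood is log L(s) = sum_i [r_i log f_i(s) - f_i(s)] up to a constant, and the unisensory estimate is its maximizer. Tuning widths and gains are illustrative, not the letter's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
prefs = np.linspace(-90.0, 90.0, 181)            # preferred azimuths (deg)

def tuning(s, width=10.0, gain=5.0):
    """Gaussian tuning curves of the whole population at stimulus s."""
    return gain * np.exp(-0.5 * ((s - prefs) / width)**2)

s_true = 12.0
rates = rng.poisson(tuning(s_true))              # one trial of Poisson spike counts

grid = np.linspace(-90.0, 90.0, 1801)
loglik = np.array([np.sum(rates * np.log(tuning(s) + 1e-12) - tuning(s))
                   for s in grid])
print("maximum-likelihood estimate:", grid[np.argmax(loglik)])  # near s_true
```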
11.
Front Comput Neurosci ; 11: 113, 2017.
Article in English | MEDLINE | ID: mdl-29326578

ABSTRACT

Hemianopic patients exhibit improved visual detection in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after being trained to detect and saccade toward visual targets presented in spatiotemporal proximity to auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced the "online" effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and of the SC-extrastriate route (this occurs in the presence of spared V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations.

12.
Neuropsychologia ; 91: 120-140, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27424274

ABSTRACT

Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex, and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contributions of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; and ii) a different role of the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage. The present study may contribute to advancing understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory-deficit conditions.


Subjects
Auditory Perception/physiology, Cerebral Cortex/physiopathology, Hemianopsia/physiopathology, Neural Networks, Computer, Superior Colliculi/physiopathology, Visual Perception/physiology, Cerebral Cortex/physiology, Computer Simulation, Humans, Models, Neurological, Neural Pathways/physiology, Neural Pathways/physiopathology, Neurons/physiology, Superior Colliculi/physiology, Synapses/physiology
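The lesion protocol lends itself to a compact sketch (all sizes hypothetical): zero out a 1-D V1 map over one hemifield, then restore a few narrow spared islands. Whether a blind-field stimulus evokes any V1 activity then depends on whether it lands on an island, which is the condition the model ties to auditory enhancement of conscious visual detection.

```python
import numpy as np

rng = np.random.default_rng(3)
az = np.linspace(-90.0, 90.0, 181)               # azimuth map (deg)
v1_gain = np.ones_like(az)                       # 1 = intact tissue
v1_gain[az < 0] = 0.0                            # unilateral lesion: left hemifield blind
for center in rng.uniform(-80.0, -10.0, size=3): # three spared islands (hypothetical)
    v1_gain[np.abs(az - center) < 3.0] = 1.0     # each roughly 6 deg wide

def v1_response(stim_pos, sigma=5.0):
    """Population response to a visual stimulus, gated by surviving tissue."""
    return v1_gain * np.exp(-0.5 * ((az - stim_pos) / sigma)**2)

# Blind-field stimulus: activity arises only if it falls on a spared island.
print("blind field :", v1_response(-40.0).max())
print("intact field:", v1_response(+40.0).max())
```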
13.
Neural Netw ; 63: 234-53, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25569782

ABSTRACT

The present work investigates how complex semantics can be extracted from the statistics of input features, using an attractor neural network. The study focuses on how feature dominance and feature distinctiveness can be naturally coded using Hebbian training, and how similarity among objects can be managed. The model includes a lexical network (which represents word-forms) and a semantic network composed of several areas: each area is topologically organized (similarity) and codes for a different feature. Synapses in the model are created using Hebbian rules with different values for the pre-synaptic and post-synaptic thresholds, producing patterns of asymmetrical synapses. This work uses a simple taxonomy of schematic objects (i.e., vectors of features), with shared features (to realize categories) and distinctive features (to individuate members) occurring with different frequencies. The trained network can solve simple object recognition and object naming tasks by maintaining a distinction between categories and their members, and by providing a different role for dominant versus marginal features. Marginal features are not evoked in memory when thinking of objects, but they facilitate the reconstruction of objects when provided as input. Finally, the topological organization of features allows the recognition of objects with some modified features.


Subjects
Cognition, Models, Neurological, Neural Networks, Computer, Semantics
14.
Neural Netw ; 60: 141-65, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25218929

ABSTRACT

The brain's ability to integrate information from different modalities (multisensory integration) is fundamental for an accurate sensory experience and efficient interaction with the environment: it enhances detection of external stimuli, disambiguates conflicting situations, speeds up responsiveness, and facilitates processes of memory retrieval and object recognition. Multisensory integration operates at several brain levels: in subcortical structures (especially the Superior Colliculus), in higher-level associative cortices (e.g., posterior parietal regions), and even in early cortical areas (such as primary cortices) traditionally considered to be purely unisensory. Because brain integrative phenomena involve complex non-linear mechanisms, neurocomputational models are a key tool for understanding them. This review examines different modelling principles and architectures, distinguishing the models on the basis of their aims: (i) Bayesian models, based on probabilities and realizing optimal estimators of external cues; (ii) biologically inspired models of multisensory integration in the Superior Colliculus and in the cortex, at the level of both single neurons and networks of neurons, with emphasis on physiological mechanisms and architectural schemes; among the latter, some models exhibit synaptic plasticity and reproduce the development of integrative capabilities via Hebbian learning rules or self-organizing maps; (iii) models of semantic memory that implement object meaning as a fusion of sensory-motor features (embodied cognition). This overview paves the way for future challenges, such as reconciling neurophysiological and Bayesian models into a unifying theory, and stimulates upcoming research in both theoretical and applied domains.


Subjects
Auditory Perception/physiology, Brain/physiology, Models, Neurological, Visual Perception/physiology, Humans, Memory, Neurons/physiology, Systems Integration
15.
Neuroimage ; 92: 248-66, 2014 May 15.
Article in English | MEDLINE | ID: mdl-24518261

ABSTRACT

Perception of the external world is based on the integration of inputs from different sensory modalities. Recent experimental findings suggest that this phenomenon is present in lower-level cortical areas at early processing stages. The mechanisms underlying these early processes and the organization of the underlying circuitry are still a matter of debate. Here, we investigate audiovisual interactions by means of a simple neural network consisting of two layers of visual and auditory neurons. We suggest that the spatial and temporal aspects of audio-visual illusions can be explained within this simple framework, based on two main assumptions: auditory and visual neurons communicate via excitatory synapses; and spatio-temporal receptive fields differ in the two modalities, with auditory processing exhibiting a higher temporal resolution and visual processing a higher spatial acuity. With these assumptions, the model is able: i) to simulate the sound-induced flash fission illusion; ii) to reproduce psychometric curves assuming random variability in some parameters; iii) to account for other audio-visual illusions, such as the sound-induced flash fusion and ventriloquism illusions; and iv) to predict that visual and auditory stimuli are combined optimally in multisensory integration. In sum, the proposed model provides a unifying account of spatio-temporal audio-visual interactions, both explaining a wide set of empirical findings and serving as a framework for future experiments. In perspective, it may be used to understand the neural basis of Bayesian audio-visual inference.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Illusions/physiology, Models, Neurological, Visual Cortex/physiology, Visual Perception/physiology, Computer Simulation, Cues, Humans, Nerve Net/physiology, Neural Pathways/physiology
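The two assumptions translate directly into spatio-temporal receptive fields with swapped widths. The sketch below uses hypothetical widths to show why two beeps around a single flash can drive fission: the beeps are resolved in time by audition but fall inside one sluggish visual temporal window.

```python
import numpy as np

def st_rf(x, t, sigma_x, sigma_t):
    """Separable Gaussian receptive field over space (deg) and time (ms)."""
    return np.exp(-0.5 * (x / sigma_x)**2) * np.exp(-0.5 * (t / sigma_t)**2)

# Visual: sharp in space, sluggish in time. Auditory: the reverse.
vis = lambda x, t: st_rf(x, t, sigma_x=4.0,  sigma_t=80.0)
aud = lambda x, t: st_rf(x, t, sigma_x=20.0, sigma_t=15.0)

# Two beeps 60 ms apart: audition resolves them as separate events,
# while the flash's visual response overlaps both beeps in time, the
# window in which cross-modal drive can split the single flash percept.
print("visual overlap with 2nd beep  :", vis(0.0, 60.0))  # still substantial
print("auditory overlap with 2nd beep:", aud(0.0, 60.0))  # essentially zero
```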
16.
J Integr Neurosci ; 12(4): 401-25, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24372062

ABSTRACT

An important issue in semantic memory models is the formation of categories and taxonomies, and the different roles played by shared vs. distinctive and salient vs. marginal features. The aim of this work is to extend our previous model in order to critically discuss the mechanisms leading to the formation of categories, and to investigate how feature saliency can be learned from past experience. The model assumes that an object is represented as a collection of features, which belong to different cortical areas and are topologically organized. Excitatory synapses among features are created on the basis of past experience of object presentation, with a Hebbian paradigm including potentiation and depression of synapses and separate thresholds for presynaptic and postsynaptic activity. The model was trained using simple schematic objects as input (i.e., vectors of features) having some shared features (so as to realize a simple category) and some distinctive features with different frequencies. Three different taxonomies of objects, which differ in the number of correlated features and the structure of categories, were separately trained and tested. Results show that categories can be formed from past experience, using Hebbian rules with different thresholds for postsynaptic and presynaptic activity. Furthermore, features acquire a different saliency as a consequence of their different frequency during training. The trained network is able to solve simple object recognition tasks, maintaining a distinction between categories and individual members of a category, and providing a different role for salient versus non-salient features. In particular, non-salient features are not evoked in memory when thinking about the object, but they facilitate the reconstruction of objects when provided as input to the model. The results provide indications on which neural mechanisms can be exploited to form robust categories among objects, and on which mechanisms could be implemented in artificial connectionist systems to extract concepts and categories from a continuous stream of input objects (each represented as a vector of features).


Subjects
Brain/physiology, Computer Simulation, Learning/physiology, Models, Neurological, Synapses/physiology, Animals, Humans, Neural Networks, Computer
17.
PLoS One ; 7(8): e42503, 2012.
Article in English | MEDLINE | ID: mdl-22880007

ABSTRACT

Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound toward the visual input: the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability, but the underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus, and not vice versa; ii) the magnitude of the ventriloquism effect changes with the visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers may explain the ventriloquism effect and aftereffect, even without any convergent multimodal area. The proposed study may advance understanding of the neural architecture and mechanisms underlying visual-auditory integration in the spatial realm.


Subjects
Auditory Perception/physiology, Models, Neurological, Nerve Net/physiology, Visual Perception/physiology, Acoustic Stimulation, Photic Stimulation
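A stripped-down, feedforward rendering of the proposed mechanism (the full model is recurrent, with lateral synapses and positive feedback; all parameters here are illustrative): auditory neurons receive a broad input plus excitation from spatially aligned visual neurons, and a population-vector readout of the auditory layer shifts toward the visual location.

```python
import numpy as np

az = np.linspace(-40.0, 40.0, 161)               # azimuth axis (deg)
bump = lambda c, s: np.exp(-0.5 * ((az - c) / s)**2)

aud_in  = bump(0.0, 12.0)                        # broad auditory input at 0 deg
vis_act = bump(8.0, 3.0)                         # sharp visual activity at +8 deg
aud_act = aud_in + 0.8 * vis_act                 # inter-layer excitatory synapses

decode = lambda act: np.sum(az * act) / np.sum(act)   # population-vector readout
print("auditory alone      :", round(decode(aud_in), 2))   # about 0 deg
print("with visual stimulus:", round(decode(aud_act), 2))  # pulled toward +8 deg
```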
18.
Front Psychol ; 2: 77, 2011.
Article in English | MEDLINE | ID: mdl-21687448

ABSTRACT

In this paper, we present two neural network models - devoted to two specific and widely investigated aspects of multisensory integration - in order to demonstrate the potential of computational models to provide insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how the SC integrative capability - not present at birth - develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space - where multimodal integration occurs - may be modified by experience, such as the use of a tool to interact with far space. The utility of the modeling approach rests on several aspects: (i) The two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of the senses in the brain, and the learning and plasticity of multisensory integration. (ii) The models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections. (iii) The models can make testable predictions that can help guide future experiments, in order to validate, reject, or modify the main assumptions.

19.
Exp Brain Res ; 213(2-3): 341-9, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21556818

ABSTRACT

Multisensory neurons in the cat SC exhibit significant postnatal maturation. The first multisensory neurons to appear have large receptive fields (RFs) and cannot integrate information across sensory modalities. During the first several months of postnatal life, RFs contract, responses become more robust, and neurons develop the capacity for multisensory integration. Recent data suggest that these changes depend on both sensory experience and active inputs from association cortex. Here, we extend a computational model we developed (Cuppini et al., Front Integr Neurosci 4:6, 2010), using a limited set of biologically realistic assumptions, to describe how this maturational process might take place. The model assumes that during early life, cortical-SC synapses are present but not active, and that responses are driven by non-cortical inputs with very large RFs. Sensory experience is modeled by a "training phase" in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. Cortical-SC synaptic weights are modified during this period as a result of Hebbian rules of potentiation and depression. The result is that RFs are reduced in size and neurons become capable of responding in adult-like fashion to modality-specific and cross-modal stimuli.


Subjects
Brain Mapping, Models, Biological, Neurons/physiology, Perception/physiology, Superior Colliculi/cytology, Superior Colliculi/growth & development, Action Potentials, Animals, Animals, Newborn, Cats, Learning/physiology, Physical Stimulation
20.
Cogn Neurodyn ; 5(2): 183-207, 2011 Jun.
Article in English | MEDLINE | ID: mdl-22654990

ABSTRACT

This work presents a connectionist model of the semantic-lexical system. The model assumes that the lexical and semantic aspects of language are memorized in two distinct stores and are then linked together on the basis of previous experience, using physiological learning mechanisms. Particular characteristics of the model are: (1) the semantic aspects of an object are described by a collection of features, whose number may vary between objects. (2) Individual features are topologically organized to implement a similarity principle. (3) Gamma-band synchronization is used to segment different objects presented simultaneously. (4) The model is able to simulate the formation of categories, assuming that objects belong to the same category if they share some features. (5) Homosynaptic potentiation and homosynaptic depression are used within the semantic network to create an asymmetric pattern of synapses; this allows a different role to be assigned to shared and distinctive features during object reconstruction. (6) Features which frequently occurred together, and the corresponding word-forms, become linked via reciprocal excitatory synapses. (7) Features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that, after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the same conditions that occurred during learning, in accordance with a grounded cognition viewpoint. Several words and their conceptual descriptions can coexist in the lexical-semantic system by exploiting gamma-band time division. Schematic examples are shown to illustrate the possibility of distinguishing between words representing a category and words representing individual members, and to evaluate the role of gamma-band synchronization in priming. Finally, the model is used to simulate patients with focal lesions, assuming damage to synaptic strength in specific feature areas. Results are critically discussed in view of future model extensions and application to real objects. The model represents an original effort to incorporate many basic ideas found in recent conceptual theories within a single quantitative scaffold.
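The gamma-band time-division idea in point (3) can be caricatured in a few lines (all numbers hypothetical): each object owns a different phase window of a roughly 40 Hz cycle, so features of distinct objects are never co-active and a coincidence-based (Hebbian) readout cannot bind features across objects.

```python
import numpy as np

t = np.arange(0.0, 100.0, 0.5)                   # time axis (ms)
phase = (t % 25.0) / 25.0                        # normalized phase of a 40 Hz cycle
obj_a_on = phase < 0.5                           # object A fires in first half-cycle
obj_b_on = ~obj_a_on                             # object B in second half-cycle

features = {"A": {"red", "round"},               # hypothetical feature sets
            "B": {"yellow", "long"}}

# Features of different objects are never simultaneously active, so a
# coincidence-based readout keeps the two objects segmented.
assert not np.any(obj_a_on & obj_b_on)
print("A active for", obj_a_on.sum(), "samples; B active for", obj_b_on.sum())
```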
